information maximization
Graph Contrastive Learning with Augmentations (Appendix) Yuning You
Superpixel graphs (statistics in Table S1) gain from all augmentations except attribute masking, as shown in Figure S1. D. Difficulty of Contrastive Tasks v.s. Pairing. "Identical" stands for a no-augmentation baseline for contrastive learning; the baseline training-from-scratch accuracy is 79.71%. We report the performance of contrastive learning with different implementations of subgraph. For subgraph, we propose the following variants with increasing difficulty levels.
Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning
Lama Alssum, Hasan Abed Al Kader Hammoud, Motasem Alfarra, Juan C. Leon Alcazar, Bernard Ghanem
Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. This issue arises from the model's tendency to overwrite previously acquired knowledge with new information. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed Information Maximization (IM) regularizer, for memory-based continual learning methods, which is based exclusively on the expected label distribution and is therefore class-agnostic. As a consequence, the IM regularizer can be directly integrated into various rehearsal-based continual learning methods, reducing forgetting and favoring faster convergence. Our empirical validation shows that, across datasets and regardless of the number of tasks, our proposed regularization strategy consistently improves baseline performance with minimal computational overhead. The lightweight nature of IM makes it a practical and scalable solution, applicable to real-world continual learning scenarios where efficiency is paramount. Finally, we demonstrate the data-agnostic nature of our regularizer by applying it to video data, which presents additional challenges due to its temporal structure and higher memory requirements. Despite the significant domain gap, our experiments show that the IM regularizer also improves the performance of video continual learning methods.
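The abstract states only that the regularizer is "based exclusively on the expected label distribution"; it does not give a formula. As one plausible reading, a minimal sketch that maximizes the entropy of the batch-averaged (expected) label distribution — a class-agnostic term that can be added to any rehearsal loss. All names here (`im_regularizer`, `softmax`) are illustrative, not from the paper:

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def im_regularizer(logits):
    """Hypothetical information-maximization term: penalize low entropy
    of the batch-averaged ("expected") label distribution, which pushes
    the model toward balanced predictions without using class labels."""
    p_mean = softmax(logits).mean(axis=0)                 # expected label distribution
    entropy = -(p_mean * np.log(p_mean + 1e-12)).sum()
    return -entropy                                       # minimizing this maximizes entropy
```

Because the term depends only on model outputs, not on which classes are in memory, it would add the same negligible cost to any rehearsal-based method's training step.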
Label-Efficient Self-Supervised Speaker Verification With Information Maximization and Contrastive Learning
State-of-the-art speaker verification systems are inherently dependent on some form of human supervision, as they are trained on massive amounts of labeled data. However, manually annotating utterances is slow, expensive, and does not scale to the amount of data available today. In this study, we explore self-supervised learning for speaker verification by learning representations directly from raw audio. The objective is to produce robust speaker embeddings with small intra-speaker and large inter-speaker variance. Our approach is based on recent information maximization learning frameworks and an intensive data augmentation pre-processing step. We evaluate the ability of these methods to work without contrastive samples before showing that they achieve better performance when combined with a contrastive loss. Furthermore, we conduct experiments showing that our method reaches competitive results compared to existing techniques and can outperform a supervised baseline when fine-tuned with a small portion of labeled data.
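The abstract names "information maximization learning frameworks" that work without contrastive samples; a common instance of that family regularizes the variance and covariance of the embeddings so they do not collapse, alongside an invariance term between two augmented views. The sketch below follows that generic recipe; the weights and function names are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def variance_term(z, eps=1e-4, gamma=1.0):
    """Hinge loss keeping the per-dimension std of embeddings above gamma,
    which prevents all embeddings from collapsing to a single point."""
    std = np.sqrt(z.var(axis=0) + eps)
    return np.maximum(0.0, gamma - std).mean()

def covariance_term(z):
    """Penalize off-diagonal covariance so embedding dimensions decorrelate."""
    zc = z - z.mean(axis=0)
    n, d = z.shape
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum() / d

def info_max_loss(za, zb, inv_w=25.0, var_w=25.0, cov_w=1.0):
    """Invariance between two augmented views + information-maximization
    (variance/covariance) regularization; no negative pairs required."""
    invariance = ((za - zb) ** 2).mean()
    var = variance_term(za) + variance_term(zb)
    cov = covariance_term(za) + covariance_term(zb)
    return inv_w * invariance + var_w * var + cov_w * cov
```

In the speaker setting, `za` and `zb` would be embeddings of two differently augmented crops of the same utterance; the study's finding is that adding a contrastive term on top of such a loss helps further.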
Reviews: Variational Information Maximization for Feature Selection
There are several important issues that I believe must be addressed. A. The authors make the following argument against current approaches: current approaches are optimal under a pair of assumptions; these assumptions are hardly ever satisfied simultaneously; therefore, current approaches are flawed. The logic here is clearly flawed: the only thing actually shown is that current approaches are not optimal.
Exciting Contact Modes in Differentiable Simulations for Robot Learning
Hrishikesh Sathyanarayan, Ian Abraham
In this paper, we explore an approach to actively plan and excite contact modes in differentiable simulators as a means of tightening the sim-to-real gap. We propose an optimal experimental design approach, derived from information-theoretic methods, to identify and search for information-rich contact modes through contact-implicit optimization. We demonstrate our approach on a robot parameter estimation problem with unknown inertial and kinematic parameters, in which the robot actively seeks contacts with a nearby surface. We show that our approach reduces the error of the unknown parameter estimates over experimental runs by at least $\sim 84\%$ compared to a random sampling baseline, while achieving significantly higher information gain.
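The abstract describes information-theoretic optimal experimental design for choosing contact modes. A standard way to instantiate that idea, sketched here purely as an illustration (the paper's actual formulation uses contact-implicit optimization and is not given in the abstract), is D-optimal design: score each candidate experiment by the log-determinant gain its measurement Jacobian contributes to the Fisher information about the unknown parameters:

```python
import numpy as np

def fisher_information(jacobian, noise_var=1e-2):
    """FIM contribution of one experiment under a Gaussian measurement
    model y = f(theta) + noise, linearized via its Jacobian df/dtheta."""
    return jacobian.T @ jacobian / noise_var

def select_most_informative(jacobians, prior_fim):
    """D-optimal selection: pick the candidate contact mode whose
    measurements most increase log det of the total Fisher information,
    i.e. the expected information gain about the parameters."""
    def gain(J):
        fim = prior_fim + fisher_information(J)
        return np.linalg.slogdet(fim)[1] - np.linalg.slogdet(prior_fim)[1]
    return max(range(len(jacobians)), key=lambda i: gain(jacobians[i]))
```

Under this criterion, contact modes whose measurements barely depend on the unknown inertial or kinematic parameters score near zero gain, which is why random sampling (which wastes runs on such modes) underperforms an information-seeking policy.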